Current Issue: April-June · Volume: 2026 · Issue Number: 2 · Articles: 5
Brain–computer interfaces (BCIs) can enable humans to interact with intelligent machines more effectively. This paper explores the feasibility of using BCIs to fuse human and machine intelligence in support of human–machine collaboration; its primary objective is to identify the critical hurdles to adopting BCIs for human–machine interactions (HMIs) in complex environments. The theoretical fundamentals, available hardware and software, and existing applications of BCIs in smart machines are discussed thoroughly, with a focus on the challenges of (1) detecting and interpreting human intent and (2) using that intent in the real-time control of machines. In conclusion, a hybrid supervisory control scheme is proposed to fuse the intelligence of humans and machines in controlling an unmanned aerial vehicle (UAV); it combines human intelligence with artificial intelligence (AI) to enhance robustness and survivability....
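The hybrid supervisory control idea above can be sketched as a simple arbitration rule: defer to the decoded human intent only when the BCI decoder is confident, and otherwise fall back to the autonomous controller. The function name, command encoding, and confidence threshold below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a hybrid supervisory controller: the decoded human
# intent (from a BCI) overrides the autonomous policy only when the decoder's
# confidence exceeds a threshold. All names and thresholds are illustrative.

def hybrid_command(human_intent, human_confidence, ai_command, threshold=0.8):
    """Return the command to send to the UAV.

    human_intent     -- command decoded from the BCI (e.g. "climb"), or None
    human_confidence -- decoder confidence in [0, 1]
    ai_command       -- command proposed by the autonomous controller
    """
    if human_intent is not None and human_confidence >= threshold:
        return human_intent  # trust the human when decoding is reliable
    return ai_command        # otherwise fall back to autonomy

# A low-confidence decode defers to the AI controller.
print(hybrid_command("climb", 0.55, "hover"))  # prints "hover"
```

In practice the threshold would be tuned per user and per decoder, since BCI decoding accuracy varies widely across subjects and sessions.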
As service robots acting as salespersons are expected to be deployed efficiently and sustainably in retail environments, this paper explores the impact of their interaction cues on customer experiences within small-scale self-service shops. The corresponding customer experiences are discussed in terms of fluency, comfort and likability. We analyzed customers’ shopping behaviors and designed fourteen body gestures for the robots, giving them the ability to select appropriate movements for different stages of shopping. Two experimental scenarios, with and without robots, were designed. For the scenario involving robots, eight cases with distinct interaction cues were implemented. Participants were recruited, and their experiences were measured; statistical methods, including repeated-measures ANOVA and regression analysis, were used to analyze the data. The results indicate that robots relying solely on voice interaction are unable to significantly enhance the fluency, comfort and likability experienced by customers. Combining a robot’s voice with the ability to imitate a human salesperson’s body movements is a feasible way to genuinely improve these customer experiences, and a robot’s body movements can positively influence these experiences in human–robot interactions (HRIs), while the use of colored light cannot. We also compiled design strategies for robot interaction cues from the perspectives of cost and controllable design. Furthermore, the relationships between fluency, comfort and likability were discussed, thereby providing meaningful insights for HRIs aimed at enhancing customer experiences....
By leveraging retrieval-augmented generation (RAG) technology in human–computer interaction applications, a large language model was deployed on local hardware to construct a root cause query model for equipment failures on virtual power plant intelligent operation and maintenance platforms. This enhances the efficiency of maintenance personnel in retrieving troubleshooting solutions from vast technical documentation. First, the virtual power plant architecture was established, clarifying the functions of each layer and defining the flow of information and commands between them. Subsequently, the time-based workflow and the corresponding functional modules and sub-functions of its core smart operation and maintenance platform were analyzed. A root cause query model for equipment failures was then developed on the local hardware platform, and a knowledge base of equipment failure root causes was constructed. During deployment, two large models were selected for performance comparison. The comparative experiments showed that RAG performance varied across models, so the choice of model, and whether RAG technology should be employed at all, must be made carefully based on the hardware and deployment environment....
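The core of such a RAG pipeline is the retrieval step: rank knowledge-base entries against the maintenance query, then splice the best match into the model's prompt. The sketch below uses a plain bag-of-words cosine score over a toy knowledge base; the document texts, scoring scheme, and prompt template are illustrative assumptions, not the paper's implementation (which would typically use embedding-based retrieval).

```python
# Minimal sketch of the retrieval step in a RAG-style root-cause query.
# The knowledge base entries and the scoring scheme are illustrative.
from collections import Counter
import math

KNOWLEDGE_BASE = [
    "Inverter overheating: check cooling fan and ambient temperature.",
    "Battery cell imbalance: run equalization and inspect the BMS logs.",
    "Communication timeout: verify gateway link and retry configuration.",
]

def bow(text):
    """Bag-of-words term counts with light punctuation stripping."""
    cleaned = text.lower().replace(":", " ").replace(".", " ").replace(",", " ")
    return Counter(cleaned.split())

def cosine(a, b):
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def retrieve(query, k=1):
    """Return the k knowledge-base entries most similar to the query."""
    q = bow(query)
    ranked = sorted(KNOWLEDGE_BASE, key=lambda d: cosine(q, bow(d)), reverse=True)
    return ranked[:k]

# Retrieved context is injected into the prompt sent to the local LLM.
context = retrieve("inverter is overheating, fan noise")[0]
prompt = f"Context: {context}\nQuestion: what is the root cause?"
```

Swapping this keyword matcher for dense embeddings changes only `retrieve`; the prompt-assembly step stays the same, which is what makes the retrieval component easy to benchmark across different local models.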
This paper presents the design and implementation of an XY plotter system for playing Tic-Tac-Toe against a human opponent. The mechatronic system utilizes stepper motors controlled via a microcontroller and a CNC module, enabling precise movement along both axes. A vision-based algorithm detects user moves and processes game logic through a Minimax strategy for optimal decision-making. The study highlights the integration of robotics and human–computer interaction, demonstrating potential applications in automation, education, and interactive entertainment. Experimental results validate the system’s accuracy and efficiency in real-time gameplay scenarios. Additionally, the work emphasizes the reliability and predictability of a mathematics-based approach—embodied by the deterministic Minimax algorithm—over AI-driven methods, which may involve uncertainties or probabilistic failures. This highlights the advantage of using well-defined algorithmic logic for tasks requiring consistent performance and outcome guarantees....
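The deterministic Minimax rule the abstract emphasizes can be stated compactly: recursively score every legal move, assuming both players play optimally, and pick the best-scoring one. The sketch below is an illustrative re-implementation on a 3x3 board encoded as a 9-element list, not the authors' code.

```python
# Illustrative Minimax for Tic-Tac-Toe on a flat 9-cell board
# (cells hold "X", "O", or None). Scores: +1 = X wins, -1 = O wins, 0 = draw.

WINS = [(0,1,2), (3,4,5), (6,7,8),   # rows
        (0,3,6), (1,4,7), (2,5,8),   # columns
        (0,4,8), (2,4,6)]            # diagonals

def winner(b):
    for i, j, k in WINS:
        if b[i] and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Return (score, best_move) for `player` on board `b`."""
    w = winner(b)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i in range(9) if b[i] is None]
    if not moves:
        return 0, None                       # board full: draw
    best = None
    for m in moves:
        b[m] = player                        # try the move...
        score, _ = minimax(b, "O" if player == "X" else "X")
        b[m] = None                          # ...then undo it
        if (best is None
                or (player == "X" and score > best[0])
                or (player == "O" and score < best[0])):
            best = (score, m)
    return best

# X has two in a row: Minimax finds the winning square.
board = ["X", "X", None,
         "O", "O", None,
         None, None, None]
print(minimax(board, "X"))  # prints (1, 2): X wins by playing square 2
```

Because the game tree is tiny, no pruning or memoization is needed; the full search from an empty board still confirms the well-known result that perfect play always ends in a draw.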
Gesture recognition is a key task in the field of human–computer interaction (HCI). To address low accuracy and poor real-time performance in the recognition process, this paper designs an HCI system based on gesture recognition. The paper utilises the Ultraleap 3Di to collect a dynamic gesture dataset for the defined interaction gestures, and the device's high precision ensures reliable data collection. The paper constructs a framework combining the advantages of convolutional neural networks (CNNs) and long short-term memory (LSTM) networks, using noncontact gesture interaction as the medium of human–computer collaboration. The framework utilises a CNN to extract features from the input frame information; the extracted feature sequences are then fed into the LSTM to process the temporal information, which is highly effective for classifying and recognising the defined dynamic gestures. Finally, an HCI system based on gesture recognition is designed. Based on the Unity3D platform, the UR5 robotic arm was modelled and the cyclic coordinate descent (CCD) algorithm was applied to solve the inverse kinematics, successfully realising the semantic control of the UR5 robotic arm through gestures. The experiments verify that the CNN–LSTM network can ensure the real-time performance of the whole system, as well as the effectiveness and reliability of the gesture interaction system based on the Ultraleap 3Di....
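The CCD inverse-kinematics step mentioned above is a simple iterative idea: sweep from the last joint to the first, rotating each joint so the end effector swings toward the target, and repeat until the error is small. The sketch below solves a planar 3-link arm; the link lengths, iteration count, and tolerance are assumptions for illustration, not the paper's UR5 parameters (a real UR5 has six revolute joints in 3D).

```python
# Illustrative cyclic coordinate descent (CCD) IK for a planar 3-link arm.
# Link lengths, iteration budget, and tolerance are assumed values.
import math

def forward(angles, lengths):
    """Forward kinematics: joint positions, ending with the end effector."""
    x = y = th = 0.0
    pts = [(0.0, 0.0)]
    for a, L in zip(angles, lengths):
        th += a
        x += L * math.cos(th)
        y += L * math.sin(th)
        pts.append((x, y))
    return pts

def ccd(target, angles, lengths, iters=200, tol=1e-5):
    tx, ty = target
    for _ in range(iters):
        for i in reversed(range(len(angles))):   # sweep from end joint inward
            pts = forward(angles, lengths)
            jx, jy = pts[i]                      # joint i position
            ex, ey = pts[-1]                     # current effector position
            # Rotate joint i so the effector direction aligns with the target.
            angles[i] += (math.atan2(ty - jy, tx - jx)
                          - math.atan2(ey - jy, ex - jx))
        ex, ey = forward(angles, lengths)[-1]
        if math.hypot(tx - ex, ty - ey) < tol:
            break
    return angles

lengths = [1.0, 1.0, 0.5]
angles = ccd((1.2, 0.8), [0.1, 0.1, 0.1], lengths)
```

CCD needs no Jacobian and handles joint chains of any length, which is why it is popular for interactive arm models in engines like Unity3D, at the cost of sometimes producing unnatural intermediate poses.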